Generating natural language descriptions of Z test cases
Critical software most often requires independent validation and verification (IVV). IVV is usually performed by domain experts, who are often unfamiliar with specific, frequently formal, development technologies. At the same time, model-based testing (MBT) is a promising technique for the verification of critical software, but the test cases generated by MBT tools are logical descriptions. The problem, then, is to provide natural language (NL) descriptions of these test cases, making them accessible to domain experts. In this paper, we present ongoing research aimed at finding a suitable method for generating NL descriptions from test cases expressed in a formal specification language. A first prototype has been developed and applied to a real-world project in the aerospace sector.
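A minimal sketch of the template-based direction such generation could take, assuming test cases arrive as simple atomic predicates; the Z representation and the rules used in the paper's prototype are not given in the abstract, so all names below are illustrative:

```python
# Minimal sketch of template-based NL generation from a formal test case.
# Assumes a test case is a set of atomic pre/post predicates; the actual Z
# representation and generation rules of the prototype are not shown in the
# abstract, so everything below is illustrative.

TEMPLATES = {
    "=":  "{lhs} is equal to {rhs}",
    ">":  "{lhs} is greater than {rhs}",
    "in": "{lhs} is a member of {rhs}",
}

def describe_predicate(op: str, lhs: str, rhs: str) -> str:
    """Render one atomic predicate as an English clause."""
    return TEMPLATES[op].format(lhs=lhs, rhs=rhs)

def describe_test_case(name, preconditions, postconditions):
    """Produce a short NL description of a logical test case."""
    pre = "; ".join(describe_predicate(*p) for p in preconditions)
    post = "; ".join(describe_predicate(*p) for p in postconditions)
    return (f"Test case {name}: given that {pre}, "
            f"after the operation {post}.")

print(describe_test_case(
    "TC_01",
    preconditions=[("in", "altitude", "VALID_RANGE"), (">", "fuel", "0")],
    postconditions=[("=", "mode", "CRUISE")],
))
```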
Democratic Replay: Enhancing TV Election Debates with Interactive Visualisations
This paper presents an online platform for enhancing televised election debates with interactive visualisations. Election debates are among the highlights of election campaigns worldwide, yet they are often criticised as appearing scripted, rehearsed, detached from much of the electorate, and at times too complex. Democratic Replay enhances videos of election debates with a collection of interactive tools aimed at providing a replay experience centred on citizens' needs. We present the system requirements, design, and implementation, and report on an evaluation based on the ITV Leaders' Debate from the 2015 UK General Election campaign.
A Computational Model of Non-Cooperation in Natural Language Dialogue
A common assumption in the study of conversation is that participants fully cooperate in order to maximise the effectiveness of the exchange and ensure communication flow. This assumption persists even in situations in which the private goals of the participants are at odds: they may act strategically pursuing their agendas, but will still adhere to a number of linguistic norms or conventions which are implicitly accepted by a community of language users.
However, in naturally occurring dialogue participants often depart from such norms, for instance by asking inappropriate questions, failing to provide adequate answers, or volunteering information that is not relevant to the conversation. These are examples of what we call linguistic non-cooperation.
This thesis presents a systematic investigation of linguistic non-cooperation in dialogue. Given a specific activity, in a specific cultural context and time, the method proceeds by making explicit which linguistic behaviours are appropriate. This results in a set of rules: the global dialogue game. Non-cooperation is then measured in terms of instances in which the actions of the participants are not in accordance with these rules. The dialogue game is formally defined in terms of discourse obligations: actions that participants are expected to perform at a given point in the dialogue based on the dialogue history. In this context, non-cooperation amounts to participants failing to act according to their obligations.
We propose a general definition of linguistic non-cooperation and give a specific instance for political interview dialogues. Based on the latter, we present an empirical method which involves a coding scheme for the manual annotation of interview transcripts. The degree to which each participant cooperates is automatically determined by contrasting the annotated transcripts with the rules in the dialogue game for political interviews. The approach is evaluated on a corpus of broadcast political interviews and tested for correlation with human judgement on the same corpus.
Further, we describe a model of conversational agents that incorporates the concepts and mechanisms above as part of their dialogue manager. This allows for the generation of conversations in which the agents exhibit varying degrees of cooperation by controlling how often they favour their private goals instead of discharging their discourse obligations.
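A minimal sketch of this generation side, in which a hypothetical `cooperativeness` parameter controls how often an agent discharges pending obligations rather than pursuing its private goals; the thesis's actual dialogue manager is considerably richer, and all names here are illustrative:

```python
# Illustrative sketch of an agent dialogue manager that trades off discourse
# obligations against private goals. `cooperativeness` is a hypothetical
# control parameter, not a construct taken from the thesis itself.

import random
from collections import deque

class Agent:
    def __init__(self, name, cooperativeness):
        self.name = name
        self.cooperativeness = cooperativeness  # in [0, 1]
        self.obligations = deque()    # pending discourse obligations
        self.private_goals = deque()  # the agent's own agenda

    def observe(self, act):
        # One dialogue-game rule: a question imposes an obligation to answer.
        if act["type"] == "question":
            self.obligations.append(("answer", act["content"]))

    def next_act(self):
        # With probability `cooperativeness`, discharge a pending obligation;
        # otherwise pursue a private goal (linguistic non-cooperation).
        if self.obligations and random.random() < self.cooperativeness:
            kind, content = self.obligations.popleft()
            return {"type": kind, "content": content}
        if self.private_goals:
            return {"type": "statement", "content": self.private_goals.popleft()}
        return {"type": "statement", "content": "pass"}

agent = Agent("IE", cooperativeness=0.5)
agent.observe({"type": "question", "content": "Will taxes rise?"})
print(agent.next_act())
```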
Modelling non-cooperative dialogue: the role of conversational games and discourse obligations
We describe ongoing research towards modelling dialogue management for conversational agents that can exhibit and cope with non-cooperative behaviour. Empirical studies of conventional dialogue behaviour in the domain of political interviews and a coarse-grained notion of conversational games are used to characterise non-cooperation. We propose an agent architecture that combines conversational games and discourse obligations, and suggest an implementation.
Augmenting Public Deliberations through Stream Argument Analytics and Visualisations
Public deliberations are organised by governments and other large institutions to gather the views of citizens on controversial issues. Increasing public demand and the associated burden on public funding make the quality of public deliberation events and their outcomes critical to modern democracies. This paper focuses on technology developed around streams of computational argument data, intended to inform and improve deliberative communication in real time. Combining state-of-the-art speech recognition, argument mining, and analytics, we produce dynamic, interactive visualisations intended for non-experts, deployed incrementally in real time to deliberation participants via large screens and hand-held and personal computing devices. The goal is to bridge the gap between theoretical criteria on deliberation quality from the political sciences and objective analytics calculated automatically from computable argument data in actual public deliberations, presented as a set of visualisations that work on stream data and are simple, yet informative enough to make a positive impact on deliberative outcomes.
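A minimal sketch of what incremental analytics over mined argument data could look like, assuming each unit arrives as a small record with a speaker and an optional support/attack relation; the unit schema and the metrics are assumptions for illustration, not the project's published pipeline:

```python
# Hedged sketch of incremental analytics over a stream of mined argument
# units. The record fields and the two metrics below are illustrative
# assumptions; the paper derives its analytics from deliberation-quality
# criteria in the political-science literature.

from collections import Counter

class StreamAnalytics:
    def __init__(self):
        self.turns_by_speaker = Counter()
        self.relations = Counter()  # e.g. "support" / "attack" counts

    def update(self, unit):
        """Consume one argument unit as it arrives from the mining pipeline."""
        self.turns_by_speaker[unit["speaker"]] += 1
        if unit.get("relation"):
            self.relations[unit["relation"]] += 1

    def snapshot(self):
        """Metrics that would be pushed to live visualisations after each update."""
        total = sum(self.turns_by_speaker.values()) or 1
        return {
            "participation": {s: n / total for s, n in self.turns_by_speaker.items()},
            "relations": dict(self.relations),
        }

sa = StreamAnalytics()
sa.update({"speaker": "A", "relation": "support"})
sa.update({"speaker": "B", "relation": "attack"})
print(sa.snapshot())
```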
ADD-up: Visual analytics for augmented deliberative democracy
We demonstrate the first prototype of the ADD-up visual analytics system. The Augmented Deliberative Democracy (ADD-up) project aims to enhance public deliberations by providing argument analytics in real time. The system will ultimately take a stenographic feed of a public deliberation meeting, automatically extract the arguments therein, and project visual analytics intended to improve the deliberative quality of the event.
Debating Technology for Dialogical Argument: Sensemaking, Engagement and Analytics
Debating technologies, a newly emerging strand of research into computational technologies to support human debating, offer a powerful way of providing naturalistic, dialogue-based interaction with complex information spaces. The full potential of debating technologies for dialogical argument can, however, only be realised once key technical and engineering challenges are overcome, namely data structure, data availability, and interoperability between components. Our aim in this article is to show that the Argument Web, a vision for integrated, reusable, semantically rich resources connecting views, opinions, arguments, and debates online, offers a solution to these challenges. Through a running example taken from the domain of citizen dialogue, we demonstrate for the first time that different Argument Web components focusing on sensemaking, engagement, and analytics can work in concert as a suite of debating technologies for rich, complex, dialogical argument.
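The Argument Web builds on the Argument Interchange Format (AIF), in which information nodes (I-nodes) carry propositional content and scheme nodes (S-nodes, e.g. RA for inference, CA for conflict) connect them. A minimal sketch of such a graph, with field names that are illustrative rather than the published AIF schema:

```python
# Minimal sketch of an AIF-style argument graph, the kind of data structure
# underlying the Argument Web. Node kinds follow the Argument Interchange
# Format (I, RA, CA); the Python field names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    kind: str       # "I" (information), "RA" (inference), "CA" (conflict)
    text: str = ""  # propositional content, used by I-nodes

@dataclass
class ArgumentGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (from_id, to_id) pairs

    def add(self, node):
        self.nodes[node.node_id] = node

    def connect(self, src, dst):
        self.edges.append((src, dst))

# One argument: a premise supports a conclusion via an RA-node.
g = ArgumentGraph()
g.add(Node("i1", "I", "Turnout rises when debates are accessible"))
g.add(Node("ra1", "RA"))
g.add(Node("i2", "I", "Debates should be enhanced with replay tools"))
g.connect("i1", "ra1")
g.connect("ra1", "i2")
print(len(g.nodes), "nodes,", len(g.edges), "edges")
```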
Non-cooperation in dialogue
This paper presents ongoing research on computational models for non-cooperative dialogue. We start by analysing different levels of cooperation in conversation. Then, inspired by findings from an empirical study, we propose a technique for measuring non-cooperation in political interviews. Finally, we describe a research programme towards obtaining a suitable model and discuss previous accounts of conflictive dialogue, identifying the differences with our work.
Measuring Non-cooperation in Dialogue
This paper introduces a novel method for measuring non-cooperation in dialogue. The key idea is that linguistic non-cooperation can be measured in terms of the extent to which dialogue participants deviate from conventions regarding the proper introduction and discharging of conversational obligations (e.g., the obligation to respond to a question). Previous work on non-cooperation has focused mainly on non-linguistic, task-related non-cooperation, or has modelled non-cooperation in terms of special rules describing non-cooperative behaviours. In contrast, we start from rules for normal, correct dialogue behaviour (i.e., a dialogue game), which in principle can be derived from a corpus of cooperative dialogues, and provide a quantitative measure for the degree to which participants comply with these rules. We evaluated the model on a corpus of political interviews, with encouraging results. The model accurately predicts the degree of cooperation for one of the two dialogue game roles (interviewer), as well as the relative cooperation of both roles (i.e., which interlocutor in the conversation was most cooperative). Being able to measure cooperation has applications in many areas, from the analysis (manual, semi-automatic, and fully automatic) of natural language interactions to human-like virtual personal assistants, tutoring agents, sophisticated dialogue systems, and role-playing virtual humans.
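A minimal sketch of the measurement side, under the simplifying assumption that a participant's cooperation is the share of obligations imposed on them that they discharge; the paper's actual measure, computed by contrasting annotated transcripts with the full dialogue game, is more nuanced, and the role labels and field names below are illustrative:

```python
# Illustrative computation of a cooperation score: the fraction of
# obligations imposed on a role that it discharges. The turn schema and the
# "IR" (interviewer) / "IE" (interviewee) labels are assumptions for this
# sketch, not the paper's annotation scheme.

def cooperation_score(annotated_turns, role):
    """annotated_turns: list of dicts with 'role', 'introduces', 'discharges'.
    Returns the share of obligations imposed on `role` that it discharged."""
    imposed, discharged = 0, 0
    pending = set()
    for turn in annotated_turns:
        if turn["role"] != role:
            # Other participants impose obligations on `role` (e.g. questions).
            new = turn.get("introduces", [])
            pending.update(new)
            imposed += len(new)
        else:
            for ob in turn.get("discharges", []):
                if ob in pending:
                    pending.discard(ob)
                    discharged += 1
    return discharged / imposed if imposed else 1.0

turns = [
    {"role": "IR", "introduces": ["q1"]},
    {"role": "IE", "discharges": ["q1"]},
    {"role": "IR", "introduces": ["q2"]},
    {"role": "IE", "discharges": []},  # question q2 is dodged
]
print(cooperation_score(turns, "IE"))  # 0.5
```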
Towards a Computational Pragmatics for Non-Cooperative Dialogue [PhD Probation Report]
Most work in linguistics has approached dialogue on the assumption that participants share a common goal and cooperate to achieve it by means of conversation. In computational linguistics this assumption is even stronger: most dialogue systems, for instance, rely on the interlocutor's full cooperation to model interaction. The research described here is aimed at the other cases, those escaping the norms. Failure to cooperate can happen for many reasons. A non-native speaker trying to engage in a complex discussion might provide contributions that are not as clear and precise as would be expected. A student not quite sure about the topic he is supposed to elaborate on in an oral examination might provide information that is not entirely truthful or relevant. Someone suffering from dementia might produce utterances that are irrelevant or uninformative for the current exchange. These examples have to do with incompetence, ignorance, and irrationality, all of which lie outside the scope of our study. We focus instead on situations in which non-cooperative conversational behaviour is rational, competent, and well-informed. This report is part of the first-year probation assessment for a full-time PhD programme. It provides details about the proposed research question, a review of the relevant literature, the proposed research methodology, and a work plan.